ROS 2 is rapidly becoming the standard in the robotics industry. Built upon DDS as its default communication middleware, and used in safety-critical scenarios, adding security to robots and ROS computational graphs is increasingly a concern. The present work introduces SROS2, a series of developer tools and libraries that facilitate adding security to ROS 2 graphs. Focusing on SROS2's usability-centric approach, we present a methodology for systematically securing graphs while following the DevSecOps model. We also demonstrate the use of these security tools through an application case study that considers securing a graph using the popular Navigation2 and SLAM Toolbox stacks applied on a TurtleBot3 robot. We analyse SROS2's current capabilities, discuss its shortcomings, and thereby provide insights for future contributions and extensions. Ultimately, we present SROS2 as a usable security toolset for ROS 2 and argue that without usability, security in robotics will be greatly impaired.
translated by Google Translate
Cybersecurity in robotics is an emerging topic that has gained significant traction. Researchers have recently demonstrated some of the potential and impact of cyber attacks on robots. The implied safety-relevant adverse consequences, ranging from human harm or death to significant integrity losses, clearly surpass the privacy concerns of the classical IT world. In cybersecurity research, vulnerability databases have proven to be a very reliable tool for responsibly disclosing flaws in software products and for increasing vendors' willingness to address these issues. In this paper, we argue that existing vulnerability databases have insufficient information density and exhibit a certain bias with regard to vulnerabilities in robotics. This paper introduces the Robot Vulnerability Database (RVD), a catalog for the responsible disclosure of bugs, weaknesses, and vulnerabilities in robots. It aims to describe the design and process behind RVD together with its associated disclosure policy. In addition, the authors present a preliminary selection of the vulnerabilities already included in RVD, and call on the robotics and security communities to contribute toward eliminating zero-day vulnerabilities in robotics.
The insecurity of robots is in the spotlight. There are emerging concerns about major robot vulnerabilities and their adverse consequences. However, a considerable gap remains between the robotics and cybersecurity domains. To fill this gap, the present technical report presents the Robotics CTF (RCTF), an online playground for challenging robot security from any browser. We describe the architecture of the RCTF and provide 9 scenarios in which hackers can challenge the security of different robotic setups. Our work enables security researchers to a) reproduce virtual robotic scenarios locally and b) change the network setup to mimic real robot targets. We advocate for hacker-powered security in robotics and contribute by open sourcing our scenarios.
Robots are typically not created with security as a primary concern. In contrast with typical IT systems, cyber-physical systems rely on security to handle safety aspects. Given the above, classical scoring methods such as the Common Vulnerability Scoring System (CVSS) fail to accurately capture the severity of robot vulnerabilities. The present research work focuses on creating an open, freely accessible Robot Vulnerability Scoring System (RVSS) that accounts for the major issues relevant to robotics, including a) robot safety aspects, b) assessment of the downstream implications of a given vulnerability, c) library and third-party scoring assessments, and d) environmental variables such as the time since vulnerability disclosure or exposure on the network. Finally, an experimental evaluation of RVSS against CVSS is provided, with a focus on the robotics security landscape.
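The abstract contrasts RVSS with CVSS. The RVSS weightings themselves are defined in the paper and are not reproduced here, but the CVSS v3.1 base-score computation that both systems build on can be sketched in a few lines. The constants below come from the public CVSS v3.1 specification, not from RVSS; the robot-specific safety and environmental factors that RVSS adds on top are omitted.

```python
import math

# CVSS v3.1 metric weights for the unchanged-scope case; these constants
# are from the public CVSS v3.1 specification, not from the RVSS paper.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def roundup(x: float) -> float:
    """CVSS 'roundup': smallest one-decimal number >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def cvss_base(av: str, ac: str, pr: str, ui: str,
              c: str, i: str, a: str) -> float:
    """Base score for an unchanged-scope CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For example, a network-reachable, low-complexity, no-privilege vulnerability with high C/I/A impact (`cvss_base("N", "L", "N", "N", "H", "H", "H")`) scores 9.8, regardless of whether exploiting it could physically harm a person, which is precisely the kind of gap RVSS aims to close.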
The robotics landscape is undergoing big changes. Robots are spreading and will soon be everywhere. Systems traditionally employed in industry are being replaced by collaborative robots, while more and more professional and consumer robots are introduced into people's daily activities. Robots are increasingly intertwined with other facets of IT and are envisioned to gain far more autonomy, physically interacting with humans. We claim that, following the personal computer (PC) and the smartphone, robots are the next technological revolution, and yet manufacturers are neglecting robot security. This paper aims to raise an alert about the need to address not only safety but also robot security from the very beginning of the coming technological era. We provide herein a document reviewing robot hazards and analyzing the consequences of not facing these issues. We strongly advocate for a security-first approach as a must to be implemented now.
Robots are gaining relevance in society and are increasingly involved in mission-critical tasks. Nevertheless, robot security is being underestimated. Robot security is a complex landscape that often requires a cross-disciplinary perspective. To address this, we present the Robot Security Framework (RSF), a methodology for performing systematic security assessments in robots. We propose, adapt, and develop specific terminology, and provide guidelines to enable a holistic security assessment along four main layers (physical, network, firmware, and application). We argue that modern robots should regard internal and external communication security as equally relevant. Finally, we advocate against "security by obscurity". We conclude that the field of security in robotics deserves further research efforts.
Selecting the number of topics in LDA models is considered to be a difficult task, for which alternative approaches have been proposed. The performance of the recently developed singular Bayesian information criterion (sBIC) is evaluated and compared to the performance of alternative model selection criteria. The sBIC is a generalization of the standard BIC that can be applied to singular statistical models. The comparison is based on Monte Carlo simulations and carried out for several alternative settings, varying with respect to the number of topics, the number of documents and the size of documents in the corpora. Performance is measured using different criteria which take into account the correct number of topics, but also whether the relevant topics from the DGPs are identified. Practical recommendations for LDA model selection in applications are derived.
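The sBIC itself requires learning coefficients specific to singular models and is not reproduced here; as a minimal sketch of the kind of criterion-based selection loop being compared, the following applies the standard BIC to per-candidate corpus log-likelihoods. The LDA parameter count used below is the usual rough accounting (topic-word plus document-topic distributions) and is an illustrative assumption, as are the toy likelihood values in the usage example.

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Standard Bayesian information criterion (lower is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def select_n_topics(candidates, n_docs, vocab_size, fit):
    """Pick the topic count K minimising BIC.

    `fit(K)` returns the corpus log-likelihood of an LDA model with K
    topics. An LDA model has roughly K*(vocab_size - 1) free parameters
    for the topic-word distributions plus n_docs*(K - 1) for the
    document-topic distributions (a common rough accounting).
    """
    def n_params(k):
        return k * (vocab_size - 1) + n_docs * (k - 1)
    return min(candidates, key=lambda k: bic(fit(k), n_params(k), n_docs))
```

With hypothetical fitted log-likelihoods `{2: -5200.0, 3: -4700.0, 5: -4690.0}` for a 100-document, 50-word-vocabulary corpus, the penalty term outweighs the marginal likelihood gain of K=5 and the loop selects K=3; the sBIC replaces the `n_params * log(n)` penalty with one suited to singular models, where the effective parameter count is smaller.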
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
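HAC-Net's exact modules are available in the linked repository; as a rough illustration of what channel-wise attention over a 3D feature map means, here is a generic squeeze-and-excitation-style gate in NumPy. The shapes, weight matrices, and gate design below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def channel_attention(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation-style channel-wise attention for a 3D
    feature map x of shape (C, D, H, W): global-average-pool each
    channel, pass the pooled vector through a small two-layer gate,
    and rescale every channel by its gate value in (0, 1).

    w1: (C, C//r) and w2: (C//r, C) are learned projection matrices
    (r is a reduction ratio). A generic SE gate, not HAC-Net's module.
    """
    c = x.shape[0]
    squeeze = x.reshape(c, -1).mean(axis=1)        # (C,) per-channel pooling
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate in (0, 1)
    return x * gate.reshape(c, 1, 1, 1)            # rescale each channel
```

The design choice such a gate captures is that informative channels of the voxelized protein-ligand grid can be amplified and uninformative ones suppressed using only a per-channel scalar, which adds very few parameters relative to the convolutions.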
Counterfactual explanation is a common class of methods to make local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the predicted decision made by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator that are jointly trained, for regression or classification tasks. VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is done by learning a variational autoencoder conditionally to the output of the predictor in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics. The results are competitive with the state-of-the-art method.
Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of relying solely on their parameters to encode a wealth of world knowledge. This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments of 10 models and 4 augmentation methods on PopQA, our new open-domain QA dataset with 14k questions. We find that LMs struggle with less popular factual knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail. We then show that retrieval-augmented LMs largely outperform orders of magnitude larger LMs, while unassisted LMs remain competitive in questions about high-popularity entities. Based on those findings, we devise a simple, yet effective, method for powerful and efficient retrieval-augmented LMs, which retrieves non-parametric memories only when necessary. Experimental results show that this significantly improves models' performance while reducing the inference costs.
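The "retrieve only when necessary" idea above can be sketched as a popularity-thresholded dispatch: fall back to non-parametric (retrieved) memory for long-tail entities, where the probing experiments find parametric memory unreliable, and let the LM answer alone for popular entities to save inference cost. The threshold value, popularity measure, and function names below are illustrative assumptions, not the paper's calibrated procedure.

```python
def answer(question: str, entity_popularity: float,
           lm_answer, retrieve_and_answer, threshold: float = 1000.0):
    """Adaptive retrieval sketch: use retrieval augmentation only for
    questions about low-popularity (long-tail) entities.

    `entity_popularity` could be, e.g., Wikipedia page views of the
    question's subject entity; `lm_answer` and `retrieve_and_answer`
    are callables standing in for the plain and retrieval-augmented
    pipelines. All names and the threshold are hypothetical.
    """
    if entity_popularity < threshold:
        return retrieve_and_answer(question)   # non-parametric memory
    return lm_answer(question)                 # parametric memory only
```

The cost saving comes from skipping the retriever (and the longer augmented prompt) on the high-popularity questions where the unassisted LM is already competitive.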